37,897 research outputs found

    On Minimizing Data-read and Download for Storage-Node Recovery

    We consider the problem of efficient recovery of the data stored in any individual node of a distributed storage system, from the rest of the nodes. Applications include handling failures and degraded reads. We measure efficiency in terms of the amount of data-read and the download required. To minimize the download, we focus on the minimum bandwidth setting of the 'regenerating codes' model for distributed storage. Under this model, the system has a total of n nodes, and the data stored in any node must be (efficiently) recoverable from any d of the other (n-1) nodes. Lower bounds on the two metrics under this model were derived previously; it has also been shown that these bounds are achievable for the amount of data-read and download when d=n-1, and for the amount of download alone when d<n-1. In this paper, we complete this picture by proving the converse result, that when d<n-1, these lower bounds are strictly loose with respect to the amount of read required. The proof is information-theoretic, and hence applies to non-linear codes as well. We also show that under two (practical) relaxations of the problem setting, these lower bounds can be met for both read and download simultaneously. Comment: IEEE Communications Letters
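For context, the download lower bound in the minimum bandwidth setting referenced above is the cut-set bound of the regenerating-codes model, evaluated at the minimum-bandwidth-regenerating (MBR) point. A minimal sketch follows; the symbols alpha, beta, gamma are the standard regenerating-codes notation (per-node storage, per-helper download, total repair download), not notation taken from this paper:

```python
from fractions import Fraction

def mbr_parameters(k, d, B):
    """Cut-set bound at the minimum-bandwidth-regenerating (MBR) point:
    returns per-node storage alpha, per-helper download beta, and total
    repair download gamma = d * beta for a file of size B recoverable
    from any k nodes, with repair contacting d helpers."""
    beta = Fraction(2 * B, k * (2 * d - k + 1))  # per-helper download
    alpha = d * beta                             # per-node storage at MBR
    gamma = d * beta                             # total download to repair one node
    return alpha, beta, gamma

# example: file size B = 9 units, any k = 3 nodes recover it, d = 4 helpers
alpha, beta, gamma = mbr_parameters(k=3, d=4, B=9)
print(alpha, beta, gamma)  # 4 1 4
```

At the MBR point the total repair download equals the per-node storage, which is why this setting minimizes bandwidth; the paper's converse result concerns the data-read (not download) side of these bounds when d < n-1.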

    Performance of the split-symbol moments SNR estimator in the presence of inter-symbol interference

    The Split-Symbol Moments Estimator (SSME) is an algorithm that is designed to estimate symbol signal-to-noise ratio (SNR) in the presence of additive white Gaussian noise (AWGN). The performance of the SSME algorithm in band-limited channels is examined, and the effects of the resulting inter-symbol interference (ISI) are quantified. All results obtained are in closed form and can be easily evaluated numerically for performance prediction purposes. Furthermore, they are validated through digital simulations.
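The split-symbol idea can be sketched in a few lines: each received symbol is split into two half-symbol sums whose product isolates signal power (the noise in the two halves is independent), while the squared full-symbol sum contains signal plus noise power. The following is a minimal NumPy sketch under idealized conditions (AWGN, no ISI, ±1 symbols); the function and variable names are illustrative, not taken from the paper:

```python
import numpy as np

def ssme_snr(samples, M):
    """Split-symbol moments SNR estimate for M samples per symbol (M even)."""
    sym = samples[: (len(samples) // M) * M].reshape(-1, M)
    Ya = sym[:, : M // 2].sum(axis=1)   # first half-symbol sum
    Yb = sym[:, M // 2 :].sum(axis=1)   # second half-symbol sum
    m_p = np.mean(Ya * Yb)              # ~ signal power only (independent noise)
    m_s = np.mean((Ya + Yb) ** 2)       # ~ signal power + noise power
    return 4 * m_p / (m_s - 4 * m_p)

rng = np.random.default_rng(0)
M, N = 8, 20_000
A, sigma = 1.0, 1.0
bits = rng.choice([-1.0, 1.0], size=N)
rx = np.repeat(A * bits, M) + rng.normal(0.0, sigma, size=N * M)

true_snr = A * A * M / (sigma * sigma)  # (A*M)^2 / (M*sigma^2) = 8.0 here
est = ssme_snr(rx, M)                   # close to true_snr
```

In a band-limited channel, ISI correlates the half-symbol sums across symbol boundaries and biases these moments, which is exactly the degradation the paper quantifies in closed form.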

    Ribosomal trafficking is reduced in Schwann cells following induction of myelination.

    Local synthesis of proteins within the Schwann cell periphery is extremely important for efficient process extension and myelination, when cells undergo dramatic changes in polarity and geometry. Still, it is unclear how ribosomal distributions are developed and maintained within Schwann cell projections to sustain local translation. In this multi-disciplinary study, we expressed a plasmid encoding a fluorescently labeled ribosomal subunit (L4-GFP) in cultured primary rat Schwann cells. This enabled the generation of high-resolution, quantitative data on ribosomal distributions and trafficking dynamics within Schwann cells during early stages of myelination, induced by ascorbic acid treatment. Ribosomes were distributed throughout Schwann cell projections, with ~2-3 bright clusters along each projection. Clusters emerged within 1 day of culture and were maintained throughout early stages of myelination. Three days after induction of myelination, net ribosomal movement remained anterograde (directed away from the Schwann cell body), but ribosomal velocity decreased to about half the levels of the untreated group. Statistical and modeling analysis provided additional insight into key factors underlying ribosomal trafficking. Multiple regression analysis indicated that net transport at early time points was dependent on anterograde velocity, but shifted to dependence on anterograde duration at later time points. A simple, data-driven rate kinetics model suggested that the observed decrease in net ribosomal movement was primarily dictated by an increased conversion of anterograde particles to stationary particles, rather than changes in other directional parameters. 
These results reveal the strength of a combined experimental and theoretical approach in examining protein localization and transport, and provide evidence of an early establishment of ribosomal populations within Schwann cell projections, with a reduction in trafficking following initiation of myelination.
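The rate-kinetics conclusion above can be illustrated with a toy two-state model in which particles convert between an anterograde (moving) pool A and a stationary pool S. All rate constants below are hypothetical, chosen only to show that raising the anterograde-to-stationary conversion rate lowers the steady-state moving fraction, and hence net transport:

```python
def moving_fraction(k_as, k_sa, a0=1.0, s0=0.0, dt=0.01, steps=5000):
    """Euler-integrate dA/dt = -k_as*A + k_sa*S, dS/dt = k_as*A - k_sa*S
    and return the steady-state fraction of anterograde (moving) particles.
    Analytically this converges to k_sa / (k_as + k_sa)."""
    a, s = a0, s0
    for _ in range(steps):
        da = (-k_as * a + k_sa * s) * dt
        a, s = a + da, s - da
    return a / (a + s)

# doubling the A -> S conversion rate (the change the data suggest after
# induction of myelination) reduces the moving fraction
before = moving_fraction(k_as=0.5, k_sa=0.5)  # ~ 0.5
after = moving_fraction(k_as=1.0, k_sa=0.5)   # ~ 1/3
```

This is the simplest possible reading of the abstract's claim that the slowdown is driven by increased conversion of anterograde particles to stationary particles rather than by changes in the other directional parameters.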

    QPSK carrier-acquisition performance in the advanced receiver 2

    The frequency-acquisition performance of the Costas cross-over loop, which is used in the Advanced Receiver 2 (ARX 2) to perform Quadrature Phase Shift Keying (QPSK) carrier tracking, is described. The performance of the Costas cross-over loop is compared to that of two other QPSK carrier tracking loops: the MAP estimation loop and the generalized Costas loop. Acquisition times and probabilities of acquisition, as functions of both loop signal-to-noise ratio and frequency-offset to loop-bandwidth ratio, are obtained using computer simulations for both type-2 and type-3 loops. It is shown that even though the MAP loop results in the smallest squaring loss for all signal-to-noise ratios, the MAP loop is sometimes outperformed by the other two loops in terms of acquisition time and probability.

    Experimental Study of Remote Job Submission and Execution on LRM through Grid Computing Mechanisms

    Remote job submission and execution is a fundamental requirement of distributed computing performed using cluster computing. However, cluster computing limits usage to a single organization. A Grid computing environment can allow the use of resources available in other organizations for remote job execution. This paper discusses the concepts of batch-job execution using an LRM and using a Grid, and describes two ways of preparing a test Grid computing environment that we use for experimental testing of these concepts. It presents experimental testing of remote job submission and execution mechanisms both in the LRM-specific way and through Grid computing mechanisms. Moreover, the paper discusses various problems faced while working with the Grid computing environment and their troubleshooting. The understanding and experimental testing presented in this paper should be useful to researchers who are new to the field of job management in Grid. Comment: Fourth International Conference on Advanced Computing & Communication Technologies (ACCT), 201

    When Do Redundant Requests Reduce Latency?

    Several systems possess the flexibility to serve requests in more than one way. For instance, a distributed storage system storing multiple replicas of the data can serve a request from any of the multiple servers that store the requested data, or a computational task may be performed in a compute-cluster by any one of multiple processors. In such systems, the latency of serving the requests may potentially be reduced by sending "redundant requests": a request may be sent to more servers than needed, and it is deemed served when the requisite number of servers complete service. Such a mechanism trades off the possibility of faster execution of at least one copy of the request with the increase in the delay due to an increased load on the system. Due to this tradeoff, it is unclear when redundant requests may actually help. Several recent works empirically evaluate the latency performance of redundant requests in diverse settings. This work aims at an analytical study of the latency performance of redundant requests, with the primary goals of characterizing under what scenarios sending redundant requests will help (and under what scenarios they will not help), as well as designing optimal redundant-requesting policies. We first present a model that captures the key features of such systems. We show that when service times are i.i.d. memoryless or "heavier", and when the additional copies of already-completed jobs can be removed instantly, redundant requests reduce the average latency. On the other hand, when service times are "lighter" or when service times are memoryless and removal of jobs is not instantaneous, then not having any redundancy in the requests is optimal under high loads. Our results hold for arbitrary arrival processes. Comment: Extended version of paper presented at Allerton Conference 201
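The memoryless case with instantaneous removal can be illustrated at the level of a single request: with i.i.d. exponential service times, the first of two redundant copies to finish is the minimum of two Exp(mu) variables, which is Exp(2*mu), halving the expected service time. This sketch shows only the service-time side of the tradeoff and ignores the queueing load that the paper's model accounts for:

```python
import random

random.seed(1)
mu, n = 1.0, 100_000

# service time of a single copy vs. the first of two redundant copies
single = [random.expovariate(mu) for _ in range(n)]
paired = [min(random.expovariate(mu), random.expovariate(mu)) for _ in range(n)]

mean_single = sum(single) / n  # close to 1/mu = 1.0
mean_paired = sum(paired) / n  # close to 1/(2*mu) = 0.5
```

For "lighter-than-memoryless" service times the minimum of redundant copies no longer improves enough to offset the extra load, which is where the paper shows that sending no redundant requests becomes optimal under high loads.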